Upper Bounds for Error Rates Associated to Linear Combinations of Classifiers
Abstract
A practical and useful notion of weak dependence between many classifiers constructed from the same training data is introduced. It is shown that when (a) this weak dependence is low and (b) the expected margins are large, exponential bounds on the true error rates can be achieved. Empirical results with randomized trees, and with trees constructed via boosting and adaptive bagging, show that weak dependence is present, to some extent, in these types of trees.
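To make the two quantities in the abstract concrete, the sketch below computes empirical margins and a simple dependence proxy for an ensemble of binary classifiers with labels in {-1, +1}. This is an illustration only: the margin shown is the standard voting margin, and mean pairwise prediction correlation stands in for the paper's weak-dependence measure, whose exact definition is not reproduced here.

```python
import numpy as np

def ensemble_margins(votes, y):
    """Voting margin of each sample.

    votes : (n_samples, n_classifiers) array with entries in {-1, +1}
    y     : (n_samples,) true labels in {-1, +1}

    The margin is the average vote in favour of the true label;
    large positive margins mean confident, correct decisions.
    """
    return y * votes.mean(axis=1)

def mean_pairwise_correlation(votes):
    """Average off-diagonal correlation between classifier outputs,
    used here as a crude proxy for dependence among classifiers."""
    corr = np.corrcoef(votes.T)                # (M, M) correlation matrix
    mask = ~np.eye(corr.shape[0], dtype=bool)  # drop the diagonal
    return corr[mask].mean()

# Example: 200 samples, 25 classifiers with random +/-1 votes.
rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=200)
votes = rng.choice([-1, 1], size=(200, 25))
print(ensemble_margins(votes, y).mean(), mean_pairwise_correlation(votes))
```

With independent random votes, both the mean margin and the mean correlation are close to zero; the abstract's claim concerns the regime where margins are large while this kind of dependence stays low.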
Similar Papers
Upper Bounds for Error Rates of Linear Combinations of Classifiers
A useful notion of weak dependence between many classifiers constructed with the same training data is introduced. It is shown that if this weak dependence is low and the expected margins are large, then decision rules based on linear combinations of these classifiers can achieve error rates that decrease exponentially fast. Empirical results with randomized trees and trees constructed via...
Empirical Margin Distributions and Bounding the Generalization Error of Combined Classifiers
We prove new probabilistic upper bounds on generalization error of complex classifiers that are combinations of simple classifiers. Such combinations could be implemented by neural networks or by voting methods of combining the classifiers, such as boosting and bagging. The bounds are in terms of the empirical distribution of the margin of the combined classifier. They are based on the methods ...
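Since bounds of this kind are stated in terms of the empirical distribution of the margin, a short sketch of that quantity may help; the threshold grid and example margins below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def empirical_margin_distribution(margins, deltas):
    """Fraction of training points with margin at most delta, for each
    threshold delta in `deltas`.  Margin-based generalization bounds
    trade this quantity off against a complexity term that shrinks as
    delta grows."""
    margins = np.asarray(margins)
    return np.array([(margins <= d).mean() for d in deltas])

# Example: margins of a voting classifier evaluated on a grid.
margins = np.array([0.6, 0.2, -0.1, 0.8, 0.4])
print(empirical_margin_distribution(margins, np.linspace(0.0, 1.0, 5)))
```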
Algebraic Nonlinearity in Volterra-Hammerstein Equations
Here, an a posteriori error estimate for the numerical solution of nonlinear Volterra-Hammerstein equations is given. We present an error upper bound for nonlinear Volterra-Hammerstein integral equations in which the nonlinearity is algebraic, and develop an a posteriori error estimate for the recently proposed method of Brunner for these problems (the implicitly linear collocation method)...
New Bounds and Approximations for the Error of Linear Classifiers
In this paper, we derive lower and upper bounds for the probability of error for a linear classifier, where the random vectors representing the underlying classes obey the multivariate normal distribution. The expression of the error is derived in the one-dimensional space, independently of the dimensionality of the original problem. Based on the two bounds, we propose an approximating expressi...
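The one-dimensional reduction mentioned in the abstract can be written down directly for two Gaussian classes: the projection w·x + b is univariate normal within each class, so the error is a sum of two normal tail probabilities. The sketch below computes that exact error under this assumption; the sign convention and priors are illustrative choices, and the paper's bounds themselves are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def linear_classifier_error(w, b, mu0, cov0, mu1, cov1, p0=0.5):
    """Exact error of the rule 'decide class 1 iff w.x + b > 0' for
    classes x|0 ~ N(mu0, cov0) and x|1 ~ N(mu1, cov1), with prior p0
    on class 0.  Within each class the projection w.x + b is a
    one-dimensional Gaussian."""
    s0 = np.sqrt(w @ cov0 @ w)            # std of the projection, class 0
    s1 = np.sqrt(w @ cov1 @ w)            # std of the projection, class 1
    err0 = norm.cdf((w @ mu0 + b) / s0)   # class-0 mass sent to class 1
    err1 = norm.cdf(-(w @ mu1 + b) / s1)  # class-1 mass sent to class 0
    return p0 * err0 + (1 - p0) * err1

# Example: two spherical Gaussians separated along the first axis.
w = np.array([1.0, 0.0]); b = 0.0
mu0, mu1 = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
cov = np.eye(2)
print(linear_classifier_error(w, b, mu0, cov, mu1, cov))  # ~0.159
```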
Maximal Discrepancy vs. Rademacher Complexity for error estimation
The Maximal Discrepancy and the Rademacher Complexity are powerful statistical tools that can be exploited to obtain reliable, albeit not tight, upper bounds of the generalization error of a classifier. We study the different behavior of the two methods when applied to linear classifiers and suggest a practical procedure to tighten the bounds. The resulting generalization estimation can be succ...
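As one concrete reading of the maximal discrepancy procedure, it can be estimated by flipping the labels of a random half of the sample and training on the relabelled data, so that the learner maximizes the gap in error rates between the two halves. The sketch below is a minimal Monte Carlo version of this idea, assuming labels in {-1, +1} and using logistic regression as the linear class; it is a generic illustration, not the tightening procedure the paper proposes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def maximal_discrepancy_estimate(X, y, n_rounds=10, seed=0):
    """Monte Carlo estimate of the maximal discrepancy of a linear
    classifier class.  Each round: flip the labels of a random half
    of the sample, fit on the relabelled data (which pushes the
    classifier to disagree with the true labels on that half), and
    record the gap between the error rates of the two halves."""
    rng = np.random.default_rng(seed)
    n = len(y)
    gaps = []
    for _ in range(n_rounds):
        idx = rng.permutation(n)
        first, second = idx[: n // 2], idx[n // 2 :]
        y_flipped = y.copy()
        y_flipped[first] = -y_flipped[first]   # labels assumed in {-1, +1}
        clf = LogisticRegression().fit(X, y_flipped)
        pred = clf.predict(X)
        err_first = (pred[first] != y[first]).mean()
        err_second = (pred[second] != y[second]).mean()
        gaps.append(err_first - err_second)    # the realized discrepancy
    return float(np.mean(gaps))
```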